Implementing ArcGIS Blog - Page 3


Latest Activity (183 Posts)
DavidCrosby
Esri Contributor

Since the release of ArcGIS Enterprise 10.5.1, distributed collaboration has been a powerful way to connect and integrate your web GIS across a network of participants.  Distributed collaboration has made it possible to organize and share content between entities and organizations including departments, other governments, private businesses and more.

 

During the time since the release of distributed collaboration, the implementation of ArcGIS Enterprise “on premises” at customer sites has also grown.  And for good reason!  Deploying ArcGIS Enterprise at your site gives you complete control over your deployment while providing the core capabilities of organization-wide mapping, analysis, data management, sharing and collaboration.

 

While working with ArcGIS Enterprise customers I have discovered a common misconception regarding the ability of an internal-only ArcGIS Enterprise to collaborate with ArcGIS Online.  A common deployment pattern for ArcGIS Enterprise is to not deploy the system as public-facing at your site.  The system is deployed for internal customers only.  The general public cannot connect to this system over the public Internet and items, services, and data are all behind the organization’s firewall.  The misconception is that this internal-only ArcGIS Enterprise cannot sync (send and receive) items such as web maps, apps, and feature layers with ArcGIS Online.  It can!

 

In such a scenario, even though ArcGIS Online is considered the host of the collaboration, all communication is initiated by ArcGIS Enterprise from behind your firewall.  To enable this, organization firewall rules must be configured to support outbound communication on port 443.  A few other technical details include:

 

  • The following URLs must be whitelisted/reachable by the ArcGIS Enterprise machine:
  • The Portal for ArcGIS service account that runs the Portal for ArcGIS service needs access to the Internet.
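Once the firewall rules are in place, a quick way to verify outbound reachability is a small connectivity probe run from the ArcGIS Enterprise machine. This is a minimal sketch using only the Python standard library; the host list is illustrative only, so substitute the full URL list from the collaboration documentation.

import socket

# Illustrative hosts; replace with the full list from the
# distributed collaboration documentation.
HOSTS = ["www.arcgis.com", "yourorg.maps.arcgis.com"]

for host in HOSTS:
    try:
        # Attempt an outbound TCP connection on port 443.
        with socket.create_connection((host, 443), timeout=5):
            print(f"{host}: outbound 443 OK")
    except OSError as err:
        print(f"{host}: blocked or unreachable ({err})")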

 

Once the above requirements are met you may proceed to set up your collaboration.

 

It might seem counterintuitive that your internal ArcGIS Enterprise can sync with ArcGIS Online and even receive items.  It is not only possible, but can be a very effective way to extend the reach and use of your web GIS. 

by Anonymous User
Not applicable

Many organizations have installed and configured their ArcGIS Enterprise deployments in the cloud and are required to incorporate disaster recovery plans to ensure minimal downtime in the event of a failure or catastrophe. Organizations can choose from various options when planning for a disaster recovery scenario. For the purposes of this blog post, I will walk through the steps for replicating a primary Enterprise deployment in AWS to a warm, geographically redundant standby site using the WebGIS DR utility.

Learn more on disaster recovery and replication in ArcGIS Enterprise

Much of this workflow has been covered extensively by my colleagues for on-premises deployments in another blog post, Migrate to a new machine in ArcGIS Enterprise using the WebGIS DR tool. The general workflow for our scenario is very similar to the on-premises workflow, with some added steps utilizing AWS services* (Application Load Balancer (ALB), Route 53, and S3 buckets) as well as the exclusion of etc\hosts file entries on EC2 instances. Please read the blog post above to understand the overall workflow and prerequisites before proceeding, as it contains information not covered in this post.

* I will assume readers will have some prior experience with AWS and its services mentioned above.

Scenario

Let's say we want a replicated, geographically redundant multi-machine deployment with the following components set up in AWS in the US East and US West regions:

  • Portal for ArcGIS 
  • ArcGIS Hosting Server 
  • ArcGIS Data Store 
  • ArcGIS Geocode Server

Architectural design of base enterprise deployment with additional server

Route 53 Hosted Zones

Route 53 is a scalable Domain Name System (DNS) that utilizes a combination of public and private hosted zones, which are containers for records about how you want to route traffic for a specific domain. Public hosted zones are used to route traffic on the internet, while private hosted zones are used to route traffic within an Amazon VPC. The following workflow will be utilizing both types of hosted zones with overlapping namespaces. Therefore, in our scenario when we are logged into an EC2 instance in a VPC that is associated with a private hosted zone:

  • The Resolver will evaluate whether the name of the private hosted zone matches the domain name in the request. If there is no match, the Resolver will forward the request to a public DNS resolver.
  • If there is a match in the domain name of the request, the hosted zone is searched for a record that matches the domain name. If there is no record in the matching private hosted zone, the Resolver does not forward the request to a public DNS resolver, but will return a non-existent domain (NXDOMAIN) to the client.

That last bullet is very important! If you have other applications in the same VPC and you set up a private hosted zone, be sure to add records for those applications so that they remain fully operational.

Before Deployment

Before proceeding with deploying ArcGIS Enterprise, we need to complete the following tasks using the AWS services mentioned above in order to maintain a consistent DNS across the primary and standby sites:

 

  1. Create an ALB in both regions that will manage traffic for the ArcGIS Enterprise deployments. A few things to note:
    1. Be sure to attach the appropriate certificate from the public domain registered in Route 53.
    2. A target group must be created when configuring a new load balancer. Create a target group for Portal at this time; it does not need any registered targets yet.
  2. In Route 53, under your Public Hosted Zone:
    1. Create two identical Record Sets for each of the load balancers using CNAME types. One thing to note here:
      1. Having multiple Record Sets using the Weighted Routing Policy is not a requirement; it is a matter of preference. It is possible to have just one Record Set and replace the ALB value when the switch is needed.
    2. Set the time to live (TTL) to the recommended 60 seconds, which will "minimize the amount of time it takes for traffic to stop being routed to your failed endpoint."
    3. Set the Routing Policy to Weighted:
      1. Assign a weight of 100 to the primary site.
      2. Assign a weight of 0 to the standby site.

Public record set in Route 53 with Weighted Routing Policy

  3. Create a Private Hosted Zone for each region using the same Domain Name as the Public Hosted Zone in Route 53.
    1. Attach the private hosted zone to the VPC assigned to your ALB at the time of creation.

List of hosted zones in Route 53

  4. In Route 53, under each of the Private Hosted Zones:
    1. Create an identical Record Set to match the one created in the Public Hosted Zone.
    2. The TTL can remain at the default value of 300 seconds.
    3. The Routing Policy can remain at the default value of Simple.

Private record set in Route 53 with Simple Routing Policy

The last two steps of this pre-deployment process are vital to the success of the WebGIS DR utility. Having two Private Hosted Zones with identical Domain Names, and Record Sets that match the Record Sets in the Public Hosted Zone, ensures the ability to configure identical ArcGIS Enterprise deployments. Additionally, because the deployments are in separate regions and VPCs, they will remain operational for performing consistent backups and restores on the appropriate systems without issue.

ArcGIS Enterprise Deployment

It is finally time to deploy the components of our architecture to EC2 instances (repeat for each region):

  1. To set up the base deployment (Portal for ArcGIS, ArcGIS Hosting Server, and ArcGIS Data Store), follow steps 1-20 from our documentation on deploying Portal for ArcGIS on AWS, which utilizes Esri's Amazon Machine Images (AMIs).
    1. Make sure to assign the EC2 instances to the same VPC and Security Group as the ALB.
  2. Repeat steps 11-19 in the linked documentation from the previous step to set up the additional ArcGIS Geocode Server.
  3. Create the two remaining target groups for the ArcGIS Hosting Server and Geocode Server (the Portal target group was created during the ALB creation in the pre-deployment steps).
  4. Register each instance with its appropriate target group.
  5. Configure the appropriate forwarding rules in the ALB to match the target groups. If everything is set up correctly, we should be able to successfully access portal through the DNS alias (Name property) created in the Record Sets.

ArcGIS Portal homepage with arrow pointing to URL

  6. Follow steps 21-23 in the linked documentation above using the DNS alias from the Record Sets. For example, my portal's system properties would have the following entries:

Portal system properties

    1. Federating my hosting server (or any others) would look like so (the admin URL may vary depending on the level of administrative access placed on the Web Adaptor):

ArcGIS Server federation in Portal organizational settings

  7. Repeat step 22 for the ArcGIS Geocode Server.

We now have two identical and fully functional ArcGIS Enterprise deployments, one in each region. The deployment with a Weighted Policy of 100 in the Public Hosted Zone will be accessible over the internet and act as the primary site, while the other deployment (with a Weighted Policy of 0) will only be accessible within its own VPC and will act as the standby site.

Active and standby ArcGIS Enterprise architectural designs

Replication

Now that we have our primary and standby sites, we have to set up two replicating S3 buckets in the appropriate regions to have a fully replicated and geographically redundant deployment prepared for a disaster recovery scenario.

In Amazon S3, create two buckets with easily recognizable naming conventions, like the following:

Amazon S3 buckets

In the second step of the creation process for each bucket, under Configure Options, check the box under Versioning; this is a requirement for enabling Cross-Region Replication. Once the buckets have been created, select the bucket that will be used by the primary site, navigate to Replication under the Management tab, and select Get Started. We can leave the defaults selected for all settings throughout this process; we just need to ensure we select our designated standby site bucket as the destination.

Configuring the replication rule in Amazon S3 bucket

Amazon S3 provides a selectable option (seen in the screenshot above) called S3 Replication Time Control, which will replicate 99.99% of new objects within 15 minutes. This may be a valuable option for organizations with large-scale ArcGIS Enterprise deployments with lots of content that need minimal downtime. In my tests, I have found replication to be fast even without that option selected, granted my backups ran around 5-6 GB, which amounted to ~300 hosted services and a Geocode service (and its data) copied to ArcGIS Server.
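For teams that prefer to script this setup, here is a hedged boto3 sketch of the same versioning and replication configuration. The bucket names and the replication IAM role ARN are placeholders, and versioning must be enabled on both buckets before the replication rule is created.

import boto3

s3 = boto3.client("s3")

# Versioning is a prerequisite for Cross-Region Replication.
for bucket in ("webgisdr-primary-us-east-1", "webgisdr-standby-us-west-2"):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# Replicate new objects from the primary bucket to the standby bucket.
s3.put_bucket_replication(
    Bucket="webgisdr-primary-us-east-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",  # placeholder ARN
        "Rules": [{
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::webgisdr-standby-us-west-2"},
        }],
    },
)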

Disaster Recovery

We are now able to create backups of the primary site and restore the standby site from said backups.

  1. Create a backup of your primary site with the WebGIS DR tool on the instance where Portal is installed. Make sure to point to the appropriate S3 bucket (the one in the same region as the primary site) in the properties file. The backup will be replicated to the S3 bucket in the other region.
  2. Restore the replicated backup with the DR tool on the standby instance where Portal is installed, again being mindful of pointing to the appropriate S3 bucket in the properties file. Example invocations follow this list.
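For reference, the utility is run from the command line on the Portal machine and driven by the properties file referenced above. The invocations below are illustrative sketches (the paths are placeholders; confirm the flags against the WebGIS DR documentation for your version):

webgisdr --export --file C:\webgisdr\primary.properties

webgisdr --import --file C:\webgisdr\standby.properties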

This is the final architecture and workflow:

Full workflow/architecture outlined with S3 bucket replication

In the event of a disaster, we now have the ability to "failover" to our warm standby deployment by simply toggling the values of the Weighted Policies of the Record Sets in our Public Hosted Zone, with downtime dependent upon the TTL of the Route 53 record set. Additionally, depending on the length of downtime for the original hot site, you may need to modify how backups and restores are handled in the two environments so that any new items created in the new hot site are preserved when your original data center is restored.
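As a sketch of what that toggle could look like when scripted, the following uses boto3 to UPSERT the two weighted record sets. The hosted zone ID, record name, and ALB DNS names are placeholders.

import boto3

route53 = boto3.client("route53")

def set_weight(zone_id, name, set_identifier, alb_dns, weight):
    # UPSERT one weighted CNAME record set in the public hosted zone.
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "CNAME",
                "SetIdentifier": set_identifier,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": alb_dns}],
            },
        }]},
    )

# Fail over: drain the primary, promote the standby (all values are placeholders).
set_weight("Z0000EXAMPLE", "gis.example.com", "primary",
           "primary-alb.us-east-1.elb.amazonaws.com", 0)
set_weight("Z0000EXAMPLE", "gis.example.com", "standby",
           "standby-alb.us-west-2.elb.amazonaws.com", 100)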

* It should be heavily emphasized that this workflow with the WebGIS DR tool is not considered a highly available ArcGIS Enterprise deployment. The “failover” may be quick in this scenario, but the standby deployment will only have content available from the last WebGIS DR backup/restore. Please see our documentation on configuring a highly available ArcGIS Enterprise.

ArcGIS Enterprise failover workflow/architecture

Since both deployments are also identical, there should be no issues with services integrated into other business systems. Most importantly, this entire workflow – running WebGIS DR backups and restores, DNS toggling, as well as detecting who is acting as the primary site at any given moment – can be completely scripted and automated, ensuring that mission critical systems remain operational.

Learn more about preventing data loss and downtime with ArcGIS Enterprise.

JacobBoyle412
Esri Contributor

At the Architecture Practice, we are getting a lot of questions about shared instance pools.  

Before 10.7.0, the solution for reducing RAM consumption was to set the minimum number of instances in a low-utilization service's dedicated pool to zero. By doing so, you allow ArcGIS Server to run no ArcSOCs for the service if it hasn't received any requests in a while.

This "min-zero" solution eliminates resource usage for services that are going unused. Because you can still set the maximum number of instances, you can accommodate services that receive infrequent bursts of traffic. The next time the service gets a request, an ArcSOC powers up to handle it, at the cost of the ArcSOC's startup time. The service could then sit idle for an extended period, holding onto the started SOC until the service hits the configured idle timeout.

At 10.7.0, Esri announced support for shared instance pools. Every ArcGIS Server site now comes with a shared instance pool containing four ArcSOC processes by default. This number can be increased to accommodate more services, but the shared instance pool keeps all of the SOCs assigned to it running, so you should only increase the pool size as you need to.

Once a compatible map service has been published to your ArcGIS Server site, you can designate it to use the shared pool (a sketch of scripting this change follows the list below). Any service added to the shared instance pool no longer has its own dedicated pool; it dips into the shared pool and uses a SOC or two as needed. Once it's done handling a request, that ArcSOC is free to be used by any other service in the shared pool.

The following restrictions limit what services can use the shared instance pool:

  • Only map services published from ArcGIS Pro can be configured to use the shared instance pool. Other service types, such as geoprocessing services, are not supported.
  • Only specific capabilities of map services—feature access, WFS, WMS, and KML—can be enabled. Turn off all other capabilities before continuing.
  • Services that have custom server object extensions (SOEs) or server object interceptors (SOIs) cannot use shared instances.
  • Services published from ArcMap cannot use shared instances.
  • Cached map services published from ArcGIS Pro that meet the above requirements can use shared instances.
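As referenced above, here is a hedged sketch of designating a service to the shared pool through the ArcGIS Server Administrator API with Python. The server URL, credentials, and service name are placeholders, and the provider values ("ArcObjects11" for dedicated, "DMaps" for shared) should be verified against the documentation for your version before use.

import json
import requests

ADMIN = "https://gisserver.example.com:6443/arcgis/admin"  # placeholder

# Acquire an admin token (verify=False only for self-signed certs; prefer proper CA certs).
token = requests.post(f"{ADMIN}/generateToken", data={
    "username": "siteadmin", "password": "secret",  # placeholders
    "client": "requestip", "f": "json",
}, verify=False).json()["token"]

# Read the current service definition.
svc = requests.get(f"{ADMIN}/services/Parcels.MapServer",
                   params={"f": "json", "token": token}, verify=False).json()

# Dedicated pools use the "ArcObjects11" provider; shared instances use "DMaps".
svc["provider"] = "DMaps"

# Write the modified definition back.
requests.post(f"{ADMIN}/services/Parcels.MapServer/edit",
              data={"service": json.dumps(svc), "f": "json", "token": token},
              verify=False)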

For further information, please see:

Introducing shared instances in ArcGIS Server 10.7 

Configure service instance settings—ArcGIS Server Administration (Windows) | ArcGIS Enterprise 

JeffDeWeese
Esri Contributor

Interest has been expressed within the Esri community in having Esri provide a set of icons for building presentations, particularly those involving conceptual IT architectures. Esri recently released an initial set of "Esri Presentation Icons" that can be used to develop conceptual architecture diagrams providing a unified view of GIS deployments. Attached is the initial set of icons being provided for customer use.

The intended use of this icon set is to enhance PowerPoint presentations by illustrating core GIS concepts. These are not intended to be used for schematic drawings or highly detailed workflows.

NoahMayer
Esri Contributor

Introduction

ArcGIS Monitor is designed to help you analyze and optimize the health of your ArcGIS implementation throughout the life cycle of your enterprise GIS. ArcGIS Monitor maximizes your GIS investment by providing timely and insightful system metrics on the status, availability, usage, system performance, and resource usage of your enterprise GIS. Alerts and analysis tools provide system administrators with real-time notifications to facilitate rapid resolution when measurements are outside defined system thresholds. Reports with statistics can be used to visualize historical data and enhance communications among GIS, IT, business owners, and senior management.

The ArcGIS Monitor Server application allows you to configure and export reports for your collections as Microsoft Excel (.xlsx) files. The ArcGIS Monitor Excel report provides an overall, dashboard-like view of your monitored GIS deployment in a single Excel file that is easy to navigate, sort, and filter.

For information about configuring and running the tool, please refer to ArcGIS Monitor documentation.

The Report Summary provides a view of all configured categories (e.g., Web, ArcGIS, Infrastructure, and Site) and each Counter Type and Name (e.g., Web Requests Response Time, ArcGIS Services Summary). You can navigate from this page to a counter details page by clicking the desired link under the Name column.

Glossary of Report Summary page indicators:

■  Indicates high utilization/load to investigate.

Indicates sporadic utilization spikes to investigate.

●  Indicates low utilization.

Configure and export reports

When you configure how to export the report, it is important to filter the report time span so that it includes only busy days and hours. For example, if the system is used mainly during business hours, you should exclude Saturday and Sunday in Set Working Days and choose only business hours (e.g., 9 AM to 5 PM) in Set Working Hours. For the purpose of system design, peak-time usage and utilization are much more important than total usage.

Information Objectives for System Design

The Esri system design practice focuses on planning the hardware, software, and network characteristics for the future state of systems based on new or changing requirements.

The current health of an existing system will not necessarily have a strong relationship to a future system that has different requirements.  However, depending on the design objectives, information about the current system can be relevant.

For example, in the case of a planned migration from an on-premises system to a cloud platform, it would be quite useful to describe the current system such that it can be faithfully rendered on a cloud platform.  Or, capacity requirements driving a design may be derived from current system state, e.g. current services inventory, current system throughput, current resources utilization, plus the anticipated services and user growth over a defined term, e.g. two years.

 

Machine Resources and Utilization

It can be useful for system design to understand the current machine resources that support the system.  For example, if you are migrating a system to a cloud platform, the number of processor cores that the system has on premises has some relevance to the number you might deploy on the cloud.

Machines

Clicking on the Infrastructure Summary link in the Report Overview will lead you to the Infrastructure Summary details page. The page will list all monitored machines, with the following details:

  • Logical cores count
  • Physical cores count
  • Processor type
  • Total RAM
  • Virtual memory

Machines Utilization

The characteristics of the machines and the configuration of the instances offer incomplete insight into how heavily machine resources are utilized and what resources are truly needed for the current workload; the utilization statistics below provide that baseline for your system design.

 

Statistics Fields in Machines Utilization

  Field               Definition
  Min                 Minimum percent utilization
  Avg                 Average percent utilization
  P5, P25, P50, P75   The percentile grouping of resource utilization
  P95                 The ninety-fifth percentile; resource utilization is below this value 95% of the time
  P99                 The ninety-ninth percentile; resource utilization is below this value 99% of the time
  Max                 Maximum percent utilization

  

CPU

Clicking on the Infrastructure CPU Utilization link in the Report Overview will lead you to the CPU utilization details page. The page will list all monitored machines, with CPU utilization statistics.

We're going to focus on the P95 percentile. As we learned above, P95 signifies the CPU utilization for the top 5% busiest time. When P95 CPU utilization exceeds 90% it suggests that the machine is overloaded. In this case you should plan how to reduce the load on the machine by distributing the load or by adding more resources. This page will also help you to identify candidate machines with high CPU utilization, even if it’s below 90%, that might require additional resources or load distribution due to the anticipated user growth in your system design.
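To make the percentile fields concrete, here is a small sketch of how P95 and its neighbors relate to raw utilization samples, using numpy against fabricated data.

import numpy as np

# Fabricated CPU samples standing in for a collection's raw counter data.
cpu_samples = np.random.default_rng(0).uniform(5, 95, size=1_000)

for p in (5, 25, 50, 75, 95, 99):
    print(f"P{p}: {np.percentile(cpu_samples, p):.1f}%")
print(f"Min: {cpu_samples.min():.1f}%  Avg: {cpu_samples.mean():.1f}%  "
      f"Max: {cpu_samples.max():.1f}%")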

Current machine CPU utilization can also help you validate your capacity models: compare capacity calculation results for current usage against actual CPU utilization statistics before calculating capacity for the anticipated user growth.

Memory

Clicking on the Infrastructure Memory Physical Utilization link in the Report Overview will lead you to the physical memory utilization details page. The page will list all monitored machines, with memory utilization statistics.

For an ArcGIS Enterprise system with a default services configuration, we would usually expect to see small changes in memory utilization, with some exceptions (e.g., geoprocessing services, services configured with a higher number of max instances). As with CPU utilization, we're going to focus on the P95 percentile. When P95 memory utilization exceeds 80%, it suggests that the machine requires more memory. In this case you should plan how to reduce memory pressure on the machine. There are different ways to do that depending on the machine role, for example:

  • Portal – add more memory to the machine
  • Hosting Server - add more memory to the machine or add more machines to the site
  • Federated Server – use shared instances for less used map/feature services, add more memory to the machine, add more machines to the site, distribute services between sites (workload separation)


This page will also help you to identify candidate machines with high memory utilization, even if it’s below 80%, that might require you to plan for memory pressure alleviation due to the anticipated growth in usage or in the number of services in your system design.

Disk

Disk Utilization can help you identify machines with potentially slow I/O and determine whether storage upgrades are required.

Disk Space can give you the baseline for disk size requirements for the machines (i.e., not including shared storage) in your system design and identify whether disk size has to be increased on existing machines where available disk space is low.

 

Network

Network Utilization can give you the baseline of current network usage for your system design.

 

Process

I recommend configuring Process counters in ArcGIS Monitor to monitor ArcSOC processes in federated ArcGIS Server machines.

The Infrastructure Process Count page provides the total number of ArcSOC processes running on the machine, i.e., the number of service instances. This helps to identify ArcGIS Server usage patterns: is the number of service instances steady or volatile? Does the number of service instances during peak time exceed 200? If so, it can threaten the stability of the site, and action must be taken:


1. Tune services and reduce the maximum number of instances per service. ArcGIS Services Requests/sec and Instances information (details below) can help with tuning services to the right number of instances.

2. Configure less-used map and feature services to use shared instances. ArcGIS Services Count and Requests/sec (details below) can help with identifying candidate services for a shared instance configuration.

3. Configure the Windows registry to allow more service instances (see this technical article for more information and specific steps: https://support.esri.com/technical-article/000001218).

Process count can also provide a baseline for the number of services in your system design, to prepare for anticipated growth in the number of services and to plan services configuration.
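If you want to spot-check the ArcSOC count outside of ArcGIS Monitor, a quick sketch with the third-party psutil package (pip install psutil), run on a federated ArcGIS Server machine, could look like this:

import psutil

# Count running processes whose image name starts with "ArcSOC".
soc_count = sum(
    1 for proc in psutil.process_iter(["name"])
    if proc.info["name"] and proc.info["name"].lower().startswith("arcsoc")
)
print(f"ArcSOC instances running: {soc_count}")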

ArcGIS Services

It can be useful for system design to understand the current ArcGIS Server services inventory, usage and performance. 

 

ArcGIS Services Summary

ArcGIS Services Summary provides the ArcGIS Server services inventory, including services configuration (e.g., started/stopped, types of services), as a baseline for services configuration in your system design.


ArcGIS Services Count and Requests per Second

ArcGIS Services Count and Requests per Second provides a baseline of current system throughput for your system design, as well as ArcGIS Server services usage information (e.g., most-used, least-used, and unused services) for designing services configuration and helping tune services.

ArcGIS Services Instances

ArcGIS Services Instances information is not important for system design but can help with tuning services, e.g. number of min and max service instances for federated services.

ArcGIS Services Response Time

ArcGIS Services Response Time information can be used for capacity planning in your system design, if you are creating custom workflows in the capacity planner.

This information can also be used for optimizing current system by identifying slow-performing services. In the example above, I’ve sorted P95 elapsed time from largest to smallest, and highlighted any elapsed time over 1/2 second in orange. These are the services and layers I'd focus on optimizing, getting the P95 value below 1/2 second if possible.

 Note: The contents presented above are recommendations that will typically improve performance for many scenarios. However, in some cases, these recommendations may not produce better performance results, in which case, additional performance testing and system configuration modifications may be needed.

 

I hope you find this helpful; do not hesitate to post your questions here: ArcGIS Architecture Series: Tools of an Architect

MichaelHatcher
Occasional Contributor

In this entry, we will be looking at what a deployment looks like from the infrastructure as code (IaC) perspective with Terraform as well as the configuration management side with PowerShell DSC (Desired State Configuration). Both play important roles in automating ArcGIS Enterprise deployments, so let's jump in.

This deployment will follow a single machine model as described in the ArcGIS Enterprise documentation. It will consist of the following.

  • Portal for ArcGIS 10.7.1

  • ArcGIS Server 10.7.1 (Set as Hosting Server)

    • Services Directory is disabled

  • ArcGIS Data Store 10.7.1 (Relational Storage)

  • Two (2) Web Adaptors within IIS

    • Portal context: portal

    • Server context: hosted

    • Self-Signed certificate matching the public DNS

Additional Configurations

  • WebGISDR is configured to perform weekly full backups that are stored within Azure Blob Storage via Task Scheduler

  • Virtual machine is configured for nightly backups to an Azure Recovery Services Vault

  • RDP (3389) access is restricted via Network Security Group to the public IP of the machine from which Terraform is run.

  • Internet access (80, 443) is configured for ArcGIS Enterprise via Network Security Group

  • Azure Anti-malware is configured for the virtual machine

The complete code and configurations can be found attached below; however, you will need to provide your own ArcGIS Enterprise licenses.

Note:   This is post two (2) in a series on engineering with ArcGIS.

Infrastructure Deployment

If you are not already familiar with Terraform and how it can be used to efficiently handle the lifecycle of your infrastructure, I would recommend taking the time to read through the first entry in this series which can be found here. The Terraform code in that first entry will be used as the basis for the work that will be done in this posting.

As discussed in the first entry, Terraform is a tool designed to help manage the lifecycle of your infrastructure. Instead of rehashing the benefits of Terraform this time however, we will jump straight into the code and review what is being done. As mentioned above, the template from the first entry in this series is used again here with additional code added to perform specific actions needed for configuring ArcGIS Enterprise. Let's take a look at those additions.

These additions handle the creation of two blob containers: one ("artifacts") used for uploading deployment resources, and an empty one ("webgisdr") used when configuring WebGISDR backups. They also handle the uploading of the license files, the PowerShell DSC archive, and lastly, the web adaptor installer.

resource "azurerm_storage_container" "artifacts" {
  name                  = "${var.deployInfo["projectName"]}${var.deployInfo["environment"]}-deployment"
  resource_group_name   = "${azurerm_resource_group.rg.name}"
  storage_account_name  = "${azurerm_storage_account.storage.name}"
  container_access_type = "private"
}

resource "azurerm_storage_container" "webgisdr" {
  name                  = "webgisdr"
  resource_group_name   = "${azurerm_resource_group.rg.name}"
  storage_account_name  = "${azurerm_storage_account.storage.name}"
  container_access_type = "private"
}

resource "azurerm_storage_blob" "serverLicense" {
  name                   = "${var.deployInfo["serverLicenseFileName"]}"
  resource_group_name    = "${azurerm_resource_group.rg.name}"
  storage_account_name   = "${azurerm_storage_account.storage.name}"
  storage_container_name = "${azurerm_storage_container.artifacts.name}"
  type                   = "block"
  source                 = "./${var.deployInfo["serverLicenseFileName"]}"
}

resource "azurerm_storage_blob" "portalLicense" {
  name                   = "${var.deployInfo["portalLicenseFileName"]}"
  resource_group_name    = "${azurerm_resource_group.rg.name}"
  storage_account_name   = "${azurerm_storage_account.storage.name}"
  storage_container_name = "${azurerm_storage_container.artifacts.name}"
  type                   = "block"
  source                 = "./${var.deployInfo["portalLicenseFileName"]}"
}

resource "azurerm_storage_blob" "dscResources" {
  name                   = "dsc.zip"
  resource_group_name    = "${azurerm_resource_group.rg.name}"
  storage_account_name   = "${azurerm_storage_account.storage.name}"
  storage_container_name = "${azurerm_storage_container.artifacts.name}"
  type                   = "block"
  source                 = "./dsc.zip"
}

resource "azurerm_storage_blob" "webAdaptorInstaller" {
  name                   = "${var.deployInfo["marketplaceImageVersion"]}-iiswebadaptor.exe"
  resource_group_name    = "${azurerm_resource_group.rg.name}"
  storage_account_name   = "${azurerm_storage_account.storage.name}"
  storage_container_name = "${azurerm_storage_container.artifacts.name}"
  type                   = "block"
  source                 = "./${var.deployInfo["marketplaceImageVersion"]}-iiswebadaptor.exe"
}

This addition handles the generation of a short-lived SAS token from the storage account, which is then used during the configuration management portion to securely grab the needed files from storage. We could simplify the deployment by marking our containers as public and not requiring a token, but that is not recommended.

data "azurerm_storage_account_sas" "token" {
  connection_string = "${azurerm_storage_account.storage.primary_connection_string}"
  https_only        = true
  start             = "${timestamp()}"
  expiry            = "${timeadd(timestamp(), "5h")}"

  resource_types {
    service   = false
    container = false
    object    = true
  }

  services {
    blob  = true
    queue = false
    table = false
    file  = false
  }

  permissions {
    read    = true
    write   = true
    delete  = true
    list    = true
    add     = true
    create  = true
    update  = true
    process = true
  }
}

The final change is the addition of an extension to the virtual machine that handles the configuration management task using PowerShell DSC. Instead of reviewing this in depth here, just know that the data under the settings and protected_settings JSON blocks is passed to PowerShell DSC as parameters for use as needed by the configuration file.

resource "azurerm_virtual_machine_extension" "arcgisEnterprise-dsc" {
  name                       = "dsc"
  location                   = "${azurerm_resource_group.rg.location}"
  resource_group_name        = "${azurerm_resource_group.rg.name}"
  virtual_machine_name       = "${element(azurerm_virtual_machine.arcgisEnterprise.*.name, count.index)}"
  publisher                  = "Microsoft.Powershell"
  type                       = "DSC"
  type_handler_version       = "2.9"
  auto_upgrade_minor_version = true
  count                      = "${var.arcgisEnterpriseSpecs["count"]}"

  settings = <<SETTINGS
	{
		"configuration": {
          "url": "${azurerm_storage_blob.dscResources.url}${data.azurerm_storage_account_sas.token.sas}",
          "function": "enterprise",
		  "script": "enterprise.ps1"
		},
		"configurationArguments": {
          "webAdaptorUrl": "${azurerm_storage_blob.webAdaptorInstaller.url}${data.azurerm_storage_account_sas.token.sas}",
          "serverLicenseUrl": "${azurerm_storage_blob.serverLicense.url}${data.azurerm_storage_account_sas.token.sas}",
          "portalLicenseUrl": "${azurerm_storage_blob.portalLicense.url}${data.azurerm_storage_account_sas.token.sas}",
          "externalDNS": "${azurerm_public_ip.arcgisEnterprise.fqdn}",
		  "arcgisVersion" : "${var.deployInfo["marketplaceImageVersion"]}",
          "BlobStorageAccountName": "${azurerm_storage_account.storage.name}",
          "BlobContainerName": "${azurerm_storage_container.webgisdr.name}",
          "BlobStorageKey": "${azurerm_storage_account.storage.primary_access_key}"
      }
	}
	SETTINGS
  protected_settings = <<PROTECTED_SETTINGS
	{
		"configurationArguments": {
          "serviceAccountCredential": {
            "username": "${var.deployInfo["serviceAccountUsername"]}",
            "password": "${var.deployInfo["serviceAccountPassword"]}"
      },
		"arcgisAdminCredential": {
			"username": "${var.deployInfo["arcgisAdminUsername"]}",
			"password": "${var.deployInfo["arcgisAdminPassword"]}"
			}
		}
	}
	PROTECTED_SETTINGS
}

Configuration Management

As was touched on above, we are utilizing PowerShell DSC (Desired State Configuration) to handle the configuration of ArcGIS Enterprise as well as a few other tasks on the instance. To simplify things, I have included v2.1 of the ArcGIS module within the archive, but the public repo can be found here. The ArcGIS module provides a means to interact with ArcGIS Enterprise in a controlled manner by providing various "resources" that perform specific tasks. One of the major benefits of PowerShell DSC is that it is idempotent: we can continually run our configuration, and nothing will be modified if the system matches our code. This gives administrators the ability to push changes and updates without altering existing resources, as well as to detect configuration drift over time.

To highlight the use of one of these resources, let's take a quick look at the ArcGIS_Portal resource, which is designed to configure a new portal site without having to do so manually through the typical browser-based workflow. In this deployment, our ArcGIS_Portal resource looks exactly like the code below. The resource specifies the parameters that must be provided to successfully configure the portal site and will error out if any required parameters are not provided.

ArcGIS_Portal arcgisPortal {
    PortalEndPoint = (Get-FQDN $env:COMPUTERNAME)
    PortalContext = 'portal'
    ExternalDNSName = $externalDNS
    Ensure = 'Present'
    PortalAdministrator = $arcgisAdminCredential
    AdminEMail = 'example@esri.com' 
    AdminSecurityQuestionIndex = '12'
    AdminSecurityAnswer = 'none'
    ContentDirectoryLocation = $portalContentLocation
    LicenseFilePath = (Join-Path $(Get-Location).Path (Get-FileNameFromUrl $portalLicenseUrl))
    DependsOn = $Depends
}
$Depends += '[ArcGIS_Portal]arcgisPortal'

Because of the scope of what is being done within the configuration script here, we will not be doing a deep dive. This will come in a later article.

Putting it together

With the changes to the Terraform template as well as a very high level overview of PowerShell DSC and its purpose, we can deploy the environment using the same commands mentioned in the first entry in the series. Within the terminal of your choosing, navigate into the extracted archive that contains your licenses, template and DSC archive, and start by initializing Terraform with the following command.

terraform init

Next, you can run the following to start the deployment process. Keep in mind, we are not only deploying the infrastructure but also configuring ArcGIS Enterprise, so the time to completion will vary. When it completes, it will output the public-facing URL for accessing your ArcGIS Enterprise portal.

terraform apply


Summary

As you can quickly see, by removing the manual aspects of software configuration and infrastructure deployment, a large portion of problems can be mitigated by moving toward IaC and configuration management. There are many solutions out there to handle both aspects, and these are just two options. Explore what works for you and start moving toward a more DevOps-centric approach.

I hope you find this helpful; do not hesitate to post your questions here: Engineering ArcGIS Series: Tools of an Engineer

 

Note: The contents presented above are examples and should be reviewed and modified as needed for each specific environment.

AhmadAbdallah
Esri Contributor

It's a long-time dilemma for many organizations: devote resources to custom application development, or go with a commercial off-the-shelf (COTS) solution? In this blog post, we will highlight some fundamental strategies that will help any GIS department choose what will best fit its user needs.


In short, the answer to this question links back to what the requirements are and what the current infrastructure is. An application implementation strategy is an approach to delivering capabilities that meet your business needs with technology.

Gathering requirements is crucial to the success of your application, whether it's COTS or custom. You should work closely with the sponsor, stakeholders, users, and IT, focusing on the requirements, not the solution. The types of requirements you will have to gather are:

  • Business: high-level vision statements (e.g., share information with the public)
  • Functional: what the application should do, from a user perspective
  • Non-functional: how the application does it (usability, security, performance, etc.)

Following the requirements, there are many other factors to consider when deciding the best way to deliver new capabilities through apps. These factors include resourcing, initial development effort, ongoing app maintenance, user training, and technical support. In addition, users now expect frequent updates to their apps, which increases demand for resources to develop and maintain custom apps. As a result, it’s best to select the approach that delivers the capabilities you need with the least cost and effort. An ideal strategy will minimize cost and optimize the use of development resources.

By applying a “configure first” philosophy that prioritizes commercial off-the-shelf (COTS) apps and least-effort design patterns, you can reduce the cost and effort needed to deploy and maintain applications for your users. Organizations that adopt a configure-first philosophy start by configuring COTS apps, then extend and customize apps only when needed. Using this least-effort approach in your application implementation strategy lets you deliver capabilities faster and reserve your development resources for more complex tasks.

Depending on your specific requirements, you can:

  • Configure COTS apps to meet your business needs. ArcGIS provides many configurable COTS apps that support key workflows out of the box. Using COTS apps requires the least effort and the lowest ongoing cost. ArcGIS COTS Apps (Web App Builder, Story Maps, Operational Dashboards, Collector, GeoForm, and Survey 123)

  • Extend existing apps, either by modifying templates or by creating widgets for COTS apps. Esri offers app templates at solutions.arcgis.com and github.com/esri that provide focused solutions for specific problems; you can modify the source code for these templates to add discrete capabilities. In addition, several ArcGIS COTS apps use modular frameworks that let you create custom widgets and plug them into the apps. Extending existing apps lets you develop only the additional functionality you need, saving money and effort.

  • Customize apps using ArcGIS APIs and SDKs. These APIs and SDKs provide objects like the Identity Manager to manage credentials within custom apps that expose parts of the ArcGIS platform (such as secure web maps). Because you don’t have to code those parts yourself, you can build business-focused apps to take advantage of ArcGIS COTS capabilities, reducing the overhead for app development and maintenance. Check out developers.esri.com for more information on the wide selection of APIs and SDKs available.

In conclusion, to establish an effective application implementation strategy for your organization, look deep into your requirements and available resources, and try to follow these 3 simple guidelines:

  1. Adopt a configure-first philosophy, configuring COTS apps when possible to deliver the capabilities you need.
  2. If you have a requirement that cannot be met with configuration alone, extend existing apps with discrete capabilities and widgets.
  3. When you need capabilities that you can’t provide by configuring and extending existing apps, customize apps using ArcGIS APIs and SDKs.

RaymondBunn
Esri Contributor

Numerous technical sessions, spotlight talks, and conversations referenced the "Architecting the ArcGIS Platform: Best Practices" whitepaper (https://go.esri.com/bp). This document presents some implementation guidelines in the form of a conceptual reference architecture diagram and associated best practice briefs. You can use these guidelines to maximize the value of your ArcGIS implementation and meet your organizational goals.

JeffDeWeese
Esri Contributor

GIS has evolved at a rapid pace ever since computers took up the challenge of providing spatial capabilities. The evolution of GIS from a map-creation platform to a Location Intelligence platform necessitates building systems that are robust, reliable, and elastic, as they host solutions that your business relies on to be successful. GIS is everywhere, running on servers within traditional data centers, in the cloud, and on mobile and IoT devices. Supporting such a diverse landscape creates unique architectural challenges that require a systematic approach to designing your GIS.

Esri system architecture design is based upon traditional architecture patterns centered around multiple tiers, namely a Client/Application, Presentation, Services, Support, and Data tier. Each tier aligns with Esri products and solution components as depicted by the example logical architecture below. Following a systematic approach, this blog series will explore the various architectural tiers and their related solution components in support of building modern GIS platforms to meet today's business and Location Intelligence needs.

 

ArcGIS Enterprise conceptual architecture diagram

MichaelHatcher
Occasional Contributor

What is Infrastructure as Code?

Taken directly from Microsoft, 

"Infrastructure as Code (IaC) is the management of infrastructure (networks, virtual machines, load balancers, and connection topology) in a descriptive model".

In the simplest terms, Infrastructure as Code (IaC) is a methodology to begin treating cloud infrastructure the same as application source code. No longer are changes made directly to the infrastructure, but to the source code for a given environment or deployment. This change in behavior will lead to environments that are easily reproducible, quickly deployable and accurate.

Note:   This post is one (1) in a series on engineering with ArcGIS.

What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage components such as virtual machines, networking, storage, firewall rules, and many others. To define what needs to be deployed or changed, Terraform uses what is called a Terraform configuration, which can be made up of one or more individual files within the same folder. Terraform files use the .tf file extension and HCL (HashiCorp Configuration Language) as the configuration language. HCL is a structured configuration language that is both human- and machine-friendly for use with command-line tools, but specifically targeted toward DevOps tools, servers, etc.

In complex deployments, administrators may find they can use a different configuration file for each aspect of their environment:

/environment_a
   /production
      network.tf
      machines.tf
      storage.tf
      security.tf
   /staging
      network.tf
      machines.tf
      storage.tf
      security.tf

whereas with simple deployments, a single file may be sufficient:

/environment_a
   main.tf


To actually create and manage infrastructure, Terraform has a number of constructs that allow users to define Infrastructure as Code, but the two most important are Providers and Resources.

Resources

Resources are the mechanism that tells Terraform how the infrastructure should be deployed and configured. Each cloud provider has its own list of Resources that users have access to. The following example creates an Azure resource group and a virtual network within it.

resource "azurerm_resource_group" "rg" {
   name = "prd"
   location = "westus2"

}

resource "azurerm_virtual_network" "vnet" {
   name = "prd-vnet"
   location = "westus2"
   address_space = ["10.0.0.0/16"]
   resource_group_name = "prd"
}

Providers

Providers are the mechanism for defining what cloud provider or on-premises resources are available for use. Each provider offers a set of resources, and defines for each resource which arguments it accepts, which attributes it exports, and how changes to resources of that type are actually applied to remote APIs. Most of the available providers correspond to one cloud or on-premises infrastructure platform, and offer resource types that correspond to each of the features of that platform. In simpler terms, if the goal is to define infrastructure on Azure, an Azure-based provider must be used before Resources can be defined.

 

A provider example for Azure would look something like this,

provider "azurerm" {
   subscription_id = "this-is-not-a-real-subscription-id"
   client_id = "this-is-not-a-real-client-id"
   client_secret = "this-is-not-a-real-client-secret"
   tenant_id = "this-is-not-a-real-tenant-id"
}

Terraform has multiple methods for authenticating to a given cloud provider and in this example, a Service Principal is being utilized.

State

Lastly, terraform makes use of a State File that keeps track of the infrastructure that has been deployed and configured. This state file is what allows terraform to run checks against the last recorded state of an environment compared to the current run and provide users the delta so validation can be done before making changes. This aspect is very important in that it allows terraform to be idempotent, which is a key aspect of IaC.


Infrastructure Life-cycle

For the purposes of this introduction, we will be using the attached terraform template as the basis for the following examples. This template will be posted to GitHub in the near future for future articles to utilize and build on. It is designed to deploy the following within an Azure subscription.

Before it can be deployed, the deployInfo variable group (map) will need to be populated. When creating the Service Principal, the following documentation (Service Principal creation) can be referenced for assistance. Terraform will also need to be available locally; the steps to ensure it is can be found here.

  • Resource Group
  • Virtual Network
  • Subnet
  • Network Security Group
    • Rule to allow 80 and 443 traffic from the internet into the virtual network
    • Rule to allow RDP access (3389) into the virtual network from the public IP of the person who deploys the template. This is accomplished by querying the web during deployment and retrieving your public IP. This workflow is not recommended in production environments and is only being used for example purposes.
    • Rule to block all other internet traffic into the virtual network.
  • Storage account
  • Availability Set
  • Public IP
  • Network Interface
  • Virtual Machine
    • Windows Defender Extension
    • Recovery Services Vault Extension
  • Key Vault
  • Recovery Services Vault
    • Backup Policy

Creation

Once terraform is available locally and the deployInfo variable map has been completed, the first step in deploying infrastructure is to initialize terraform. This can be accomplished by navigating to the directory in which you have saved the above template with the .tf file extension and running the following command, which will prepare various local settings and data that will be used by subsequent commands.

terraform init

The output from initializing terraform will resemble the following.

With terraform successfully initialized, the next step in the process is to have terraform review the template and determine what changes need to take place. In this step, terraform compares the template against the current state file and produces output showing the deltas. To do so, use the following command.

terraform plan

Output has been truncated.

As this is the first run of terraform, terraform is only able to see resources it needs to add (create). Later in this post, we will walk through updating existing resources.

With terraform successfully prepared to deploy our infrastructure, we can begin the deployment by using the following command, which will start the process of creating the resources within Azure and provide the following output when complete.

terraform apply

Updates

As the deployment is utilized, it may be determined that the current infrastructure is not adequately sized and resources need to be increased. As our resources are defined as code, so is the virtual machine sizing. Within the "arcgisEnterpriseSpecs" variable map, a variable is defined with the name "size" which is the Azure machine sizing that is used by terraform when deploying resources.

If this variable is changed to "Standard_D8s_v3" and terraform plan is run again, terraform will detect a delta between what is defined in code and what is actually deployed within Azure and present it in the output.

Once this delta is planned and the changes are ready to be pushed to the actual infrastructure within Azure, simply running terraform apply again will begin the process.

Decommission

The last phase of a given deployment's life-cycle is decommissioning. Once infrastructure has reached the end of its life, it must be terminated and removed, and thankfully, terraform provides an easy method to do so with the following command.

terraform destroy

The output of destroying the infrastructure will resemble the following.

In conclusion, as teams aim to become more agile and move at a much faster pace, adopting modern methodologies such as infrastructure as code (IaC) is a great first step; it should not be viewed as unnecessary, but as a crucial step in the right direction.

I hope you find this helpful; do not hesitate to post your questions here: https://community.esri.com/community/implementing-arcgis/blog/2019/07/03/engineering-arcgis-series-t...

 

Note: The contents presented above are examples and should be reviewed and modified as needed for each specific environment.
